Supplemental Material: CHIP: A Hawkes Process Model for Continuous-time Networks with Scalable and Consistent Estimation

Neural Information Processing Systems

A.1 Community Detection

The spectral clustering algorithm for directed networks that we consider in this paper is shown in Algorithm A.1. It can be applied either to the weighted adjacency (count) matrix N or to the unweighted adjacency matrix A, where A_ij = 1{N_ij > 0} and 1{·} denotes the indicator function of its argument. This algorithm is used for the community detection step in our proposed CHIP estimation procedure. For undirected networks, which we use for the theoretical analysis in Section 4, spectral clustering is performed by running k-means clustering on the rows of the eigenvector matrix of N or A, rather than on the rows of the concatenated singular vector matrix.

A.2 Estimation of Hawkes Process Parameters

Ozaki (1979) derived the log-likelihood function for Hawkes processes with exponential kernels. For event times t_1 < ... < t_k observed on [0, T] with conditional intensity λ(t) = µ + Σ_{t_i < t} α e^{−β(t − t_i)}, it takes the form

log L = −µT + Σ_{i=1}^{k} (α/β)[e^{−β(T − t_i)} − 1] + Σ_{i=1}^{k} log[µ + α Σ_{t_j < t_i} e^{−β(t_i − t_j)}].  (A.1)

The three parameters µ, α, β can be estimated by maximizing (A.1) using standard numerical methods for non-linear optimization (Nocedal & Wright, 2006). We provide closed-form equations for estimating m_ab = α_ab/β_ab and µ_ab in (2).
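The directed spectral clustering step described above (SVD of the adjacency matrix, then k-means on the rows of the concatenated singular-vector matrix) can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's Algorithm A.1 verbatim; the function name and the simple farthest-point k-means initialization are choices made here for self-containment.

```python
import numpy as np

def spectral_cluster_directed(N, k, n_iter=50):
    """Sketch of spectral clustering for directed networks:
    take the top-k left and right singular vectors of the adjacency
    (count) matrix N, concatenate them row-wise, and run k-means on
    the rows. Illustrative only; details may differ from Algorithm A.1."""
    U, s, Vt = np.linalg.svd(N)
    Z = np.hstack([U[:, :k], Vt[:k, :].T])  # concatenated singular vectors

    # plain Lloyd's k-means with farthest-point initialization
    centers = np.array([Z[0]])
    for _ in range(1, k):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, Z[d.argmax()]])
    for _ in range(n_iter):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = Z[labels == c].mean(0)
    return labels
```

For undirected networks, as noted above, one would instead run k-means on the rows of the top-k eigenvector matrix of N or A.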
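The exponential-kernel log-likelihood (A.1) is cheap to evaluate because the inner sum A(i) = Σ_{t_j < t_i} e^{−β(t_i − t_j)} obeys the recursion A(i) = e^{−β(t_i − t_{i−1})}(1 + A(i−1)), with A(1) = 0. A minimal sketch for a single univariate process follows; the function name is hypothetical, and CHIP's actual estimator aggregates over all node pairs in a block pair rather than fitting one process at a time.

```python
import numpy as np

def hawkes_exp_loglik(params, t, T):
    """Log-likelihood (A.1) of a univariate Hawkes process with
    conditional intensity mu + sum_{t_i < t} alpha * exp(-beta*(t - t_i)),
    for sorted event times t observed on [0, T]."""
    mu, alpha, beta = params
    # A(i) = sum_{j < i} exp(-beta*(t_i - t_j)), via the standard recursion
    A = np.zeros(len(t))
    for i in range(1, len(t)):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A[i - 1])
    compensator = -mu * T + (alpha / beta) * (np.exp(-beta * (T - t)) - 1.0).sum()
    return compensator + np.log(mu + alpha * A).sum()
```

To estimate (µ, α, β) one would maximize this quantity numerically, e.g. by minimizing its negative with scipy.optimize.minimize under positivity bounds, as suggested by the reference to Nocedal & Wright (2006). Note that setting α = 0 recovers the homogeneous Poisson log-likelihood −µT + k log µ, a useful sanity check.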






Provable Training for Graph Contrastive Learning

Neural Information Processing Systems

Graph Contrastive Learning (GCL) has emerged as a popular approach for learning node embeddings from augmented graphs without labels. Although the key principle of maximizing the similarity between positive node pairs while minimizing it between negative node pairs is well established, some fundamental questions remain open. Given the complex graph structure, are some nodes consistently well trained, following this principle even under different graph augmentations? Or are some nodes more likely to remain under-trained across augmentations and to violate the principle? And how can we distinguish these nodes and use them to guide the training of GCL?
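The positive/negative-pair principle stated above is typically instantiated as an InfoNCE-style loss between two augmented views, where each node's two views form the positive pair and all cross-view mismatches act as negatives. A minimal numpy sketch follows; it is a generic illustration of the principle, not this paper's method, and the function name and temperature value are assumptions.

```python
import numpy as np

def infonce_loss(z1, z2, tau=0.5):
    """InfoNCE-style node contrastive loss between two augmented views.
    z1[i] and z2[i] are embeddings of the same node (positive pair);
    all other cross-view pairs serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

A node whose loss stays high across many augmentations is a natural candidate for the "untrained" category the questions above ask about.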


Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle Counting Power

Neural Information Processing Systems

The ability of graph neural networks (GNNs) to count certain graph substructures, especially cycles, is important for the success of GNNs on a wide range of tasks, and has recently become a popular metric for evaluating the expressive power of GNNs. Many proposed GNN models with provable cycle counting power are based on subgraph GNNs, i.e., extracting a bag of subgraphs from the input graph, generating a representation for each subgraph, and using these to augment the representation of the input graph. However, those methods require heavy preprocessing and suffer from high time and memory costs. In this paper, we overcome these limitations of subgraph GNNs by proposing a novel class of GNNs, $d$-Distance-Restricted FWL(2) GNNs ($d$-DRFWL(2) GNNs), based on the well-known FWL(2) algorithm.
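For context on the quantity being targeted: small cycle counts have simple closed forms in terms of adjacency-matrix powers, which is what "provable cycle counting power" asks a GNN to match. The sketch below computes the 3-cycle (triangle) count this way; it illustrates the target quantity only, not the paper's $d$-DRFWL(2) architecture.

```python
import numpy as np

def count_triangles(A):
    """Number of 3-cycles in an undirected simple graph with binary
    adjacency matrix A. Each triangle yields 6 closed walks of length 3
    (3 starting vertices x 2 directions), so the count is trace(A^3)/6."""
    A = np.asarray(A, dtype=np.int64)
    return int(np.trace(A @ A @ A) // 6)
```

Longer cycles (4-, 5-, 6-cycles) have analogous but increasingly involved formulas, which is part of why expressive-power results about counting them are nontrivial.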


Mixture of Link Predictors on Graphs

Neural Information Processing Systems

Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task in graph machine learning. Heuristic methods, leveraging a range of pairwise measures such as common neighbors and shortest paths, often rival the performance of vanilla Graph Neural Networks (GNNs). Accordingly, recent advancements in GNNs for link prediction (GNN4LP) have primarily focused on integrating one or a few types of pairwise information. In this work, we reveal that different node pairs within the same dataset require different pairwise information for accurate prediction, and that models applying the same pairwise information uniformly to all pairs can achieve only suboptimal performance. As a result, we propose Link-MoE, a simple mixture-of-experts model for link prediction. Link-MoE uses various GNNs as experts and strategically selects the appropriate expert for each node pair based on various types of pairwise information. Experimental results across diverse real-world datasets demonstrate substantial performance improvements from Link-MoE. Notably, Link-MoE achieves a relative improvement of 18.71% on the MRR metric for the Pubmed dataset and 9.59% on the Hits@100 metric for the ogbl-ppa dataset, compared to the best baselines. The code is available at https://github.com/ml-ml/Link-MoE/.
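The common-neighbors heuristic mentioned above scores a candidate pair (u, v) by the number of neighbors they share, which for a binary undirected adjacency matrix A is simply (A @ A)[u, v]. A minimal sketch follows; it illustrates one of the pairwise signals a gating model like Link-MoE could route on, not Link-MoE itself, and the function name is hypothetical.

```python
import numpy as np

def common_neighbors_scores(A, pairs):
    """Common-neighbors link-prediction heuristic:
    score(u, v) = |N(u) ∩ N(v)| = (A @ A)[u, v]
    for a binary undirected adjacency matrix A."""
    A2 = np.asarray(A) @ np.asarray(A)
    return [int(A2[u, v]) for u, v in pairs]
```

Different heuristics (shortest-path distance, Adamic-Adar, etc.) capture different pairwise structure, which is exactly why a per-pair mixture over experts can outperform any single uniform choice.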